
    Entropy and the Law of Small Numbers

    Two new information-theoretic methods are introduced for establishing Poisson approximation inequalities. First, using only elementary information-theoretic techniques, it is shown that when $S_n=\sum_{i=1}^n X_i$ is the sum of the (possibly dependent) binary random variables $X_1,X_2,\ldots,X_n$, with $E(X_i)=p_i$ and $E(S_n)=\lambda$, then
    $$D(P_{S_n}\|\mathrm{Po}(\lambda)) \leq \sum_{i=1}^n p_i^2 + \Big[\sum_{i=1}^n H(X_i) - H(X_1,X_2,\ldots,X_n)\Big],$$
    where $D(P_{S_n}\|\mathrm{Po}(\lambda))$ is the relative entropy between the distribution of $S_n$ and the Poisson($\lambda$) distribution. The first term in this bound measures the individual smallness of the $X_i$ and the second term measures their dependence. A general method is outlined for obtaining corresponding bounds when approximating the distribution of a sum of general discrete random variables by an infinitely divisible distribution. Second, in the particular case when the $X_i$ are independent, the following sharper bound is established,
    $$D(P_{S_n}\|\mathrm{Po}(\lambda)) \leq \frac{1}{\lambda} \sum_{i=1}^n \frac{p_i^3}{1-p_i},$$
    and it is also generalized to the case when the $X_i$ are general integer-valued random variables. Its proof is based on the derivation of a subadditivity property for a new discrete version of the Fisher information, and uses a recent logarithmic Sobolev inequality for the Poisson distribution.
    Comment: 15 pages. To appear, IEEE Trans. Inform. Theory
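    The sharper independent-case bound, $D(P_{S_n}\|\mathrm{Po}(\lambda)) \leq \frac{1}{\lambda}\sum_i p_i^3/(1-p_i)$, can be checked numerically by convolving the Bernoulli distributions exactly. The following is a minimal sketch, not the paper's method; the $p_i$ values are arbitrary illustrative choices.

```python
from math import exp, factorial, log

def poisson_binomial_pmf(ps):
    """Exact pmf of S_n = X_1 + ... + X_n for independent Bernoulli(p_i)."""
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            new[k] += q * (1 - p)    # X_i = 0
            new[k + 1] += q * p      # X_i = 1
        pmf = new
    return pmf

def kl_to_poisson(pmf, lam):
    """Relative entropy D(P || Po(lam)) in nats; P has finite support."""
    return sum(pk * log(pk / (exp(-lam) * lam**k / factorial(k)))
               for k, pk in enumerate(pmf) if pk > 0)

ps = [0.1, 0.2, 0.05, 0.15]   # arbitrary illustrative p_i
lam = sum(ps)                  # E(S_n) = lambda
pmf = poisson_binomial_pmf(ps)
divergence = kl_to_poisson(pmf, lam)
bound = sum(p**3 / (1 - p) for p in ps) / lam
print(f"D = {divergence:.6f} <= bound = {bound:.6f}")
```

    For these values the divergence comes out well below the bound, as the theorem requires; the convolution is exact, so no sampling error is involved.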

    Horizon-Independent Optimal Prediction with Log-Loss in Exponential Families

    We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlett (2012b) showed that a Bayesian prediction strategy with Jeffreys prior and sequential normalized maximum likelihood (SNML) coincide and are optimal if and only if the latter is exchangeable, and if and only if the optimal strategy can be calculated without knowing the time horizon in advance. They posed the question of which families have exchangeable SNML strategies. This paper fully answers this open problem for one-dimensional exponential families: exchangeability can occur only for three classes of natural exponential family distributions, namely the Gaussian, the Gamma, and the Tweedie exponential family of order 3/2.
    Keywords: SNML Exchangeability, Exponential Family, Online Learning, Logarithmic Loss, Bayesian Strategy, Jeffreys Prior, Fisher Information
    Comment: 23 pages
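    The exchangeability question can be made concrete with the standard SNML definition: the next-symbol probability is proportional to the maximized likelihood of the extended sequence. Below is a minimal Python sketch for the Bernoulli model, which is not among the three exchangeable classes, so permuted sequences can receive different joint probabilities. The function names are illustrative, not from the paper.

```python
from math import prod

def snml_weight(k, t):
    # Maximized Bernoulli likelihood of a length-t sequence with k ones,
    # attained at the MLE p = k/t (with the convention 0**0 == 1).
    p = k / t
    return p**k * (1 - p)**(t - k)

def snml_prob(past, x):
    # SNML conditional probability of the next symbol x given the past:
    # proportional to the maximized likelihood of the extended sequence.
    t = len(past) + 1
    k = sum(past)
    w1, w0 = snml_weight(k + 1, t), snml_weight(k, t)
    return (w1 if x == 1 else w0) / (w0 + w1)

def snml_joint(seq):
    # Joint probability assigned by the sequential SNML strategy.
    return prod(snml_prob(seq[:i], seq[i]) for i in range(len(seq)))

# [1, 0, 1] and [1, 1, 0] are permutations of each other, yet SNML
# assigns them different probabilities: Bernoulli SNML is not exchangeable.
print(snml_joint([1, 0, 1]))  # 1/20
print(snml_joint([1, 1, 0]))  # 8/155
```

    For an exchangeable strategy the two printed values would have to agree, since exchangeability means the joint probability depends only on the multiset of observed symbols.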

    Efficiency of entropy testing

    The aim of this article is to analyse how the Portuguese New State (Estado Novo, 1933-1974) transformed the political institutions inherited from the parliamentary republic and founded new bodies intended for the control and punishment of political dissent. To describe this process of "authoritarian transition", we first analyse the creation of repressive laws and institutions in order to problematize their contradictory effects on political activism. While the description of public-order legislation and institutions is general, the effects are analysed more specifically in the case of the repression and control of student activism and of the Marxist left (the Communist Party and the radical left). Finally, we analyse the effects on activists' trajectories, from the fall of the regime to the beginning of the democratic transition.

    On the Bahadur-Efficient Testing of Uniformity by Means of the Entropy


    Thinning, Entropy, and the Law of Thin Numbers
